We propose Universal Document Processing (UDOP), a foundation Document AI model that unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and the document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained both on large-scale unlabeled document corpora, using innovative self-supervised objectives, and on diverse labeled data. UDOP also learns to generate document images from the text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. Our method sets the state of the art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).
The diverse demands of different summarization tasks and their high annotation costs are driving a need for few-shot summarization. However, despite the emergence of many summarization tasks and datasets, the current training paradigm for few-shot summarization systems ignores potentially shareable knowledge in heterogeneous datasets. To this end, we propose \textsc{UniSumm}, a unified few-shot summarization model pre-trained on multiple summarization tasks, which can be prefix-tuned to excel at any few-shot summarization dataset. Meanwhile, to better evaluate few-shot summarization systems, under the principles of diversity and robustness, we assemble and release a new benchmark \textsc{SummZoo}. It consists of $8$ diverse summarization tasks with multiple sets of few-shot samples for each task, covering both monologue and dialogue domains. Experimental results and ablation studies show that \textsc{UniSumm} outperforms strong baseline systems by a large margin across all tasks in \textsc{SummZoo} under both automatic and human evaluations. We release our code and benchmark at \url{https://github.com/microsoft/UniSumm}.
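As a rough illustration of the prefix-tuning recipe, the pretrained backbone stays frozen and only a small set of continuous vectors is trained per task. A minimal sketch follows (simplified to input-level soft prompts; true prefix tuning prepends key/value vectors at every layer, and all names and sizes here are assumptions, not UniSumm's):

```python
import torch
import torch.nn as nn

class SoftPrefix(nn.Module):
    """Learned prefix vectors prepended to token embeddings; only these
    parameters are trained for a new few-shot task (hypothetical sizes)."""
    def __init__(self, prefix_len: int = 20, d_model: int = 512):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)

    def forward(self, token_embs: torch.Tensor) -> torch.Tensor:
        # token_embs: (batch, seq_len, d_model)
        batch = token_embs.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, token_embs], dim=1)

# Frozen backbone standing in for the pretrained multi-task summarizer.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=2,
)
for p in backbone.parameters():
    p.requires_grad = False           # freeze; train only the prefix

soft_prefix = SoftPrefix()
embs = torch.randn(4, 30, 512)        # a batch of embedded documents
out = backbone(soft_prefix(embs))     # shape: (4, 20 + 30, 512)
optimizer = torch.optim.AdamW(soft_prefix.parameters(), lr=1e-3)
```

Because only the prefix parameters receive gradients, each few-shot task adds a tiny set of weights on top of the shared pre-trained model.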
Controllable summarization allows users to generate customized summaries with specified attributes. However, due to the lack of dedicated annotations for controlled summaries, existing works have to craft pseudo datasets by adapting generic summarization benchmarks. Furthermore, most research focuses on controlling single attributes individually (e.g., a short summary or a highly abstractive summary) rather than controlling a mix of attributes together (e.g., a short and highly abstractive summary). In this paper, we propose MACSum, the first human-annotated summarization dataset for controlling mixed attributes. It contains source texts from two domains, news articles and dialogues, with human-annotated summaries controlled by five designed attributes (Length, Extractiveness, Specificity, Topic, and Speaker). We propose two simple and effective parameter-efficient approaches for the new task of mixed controllable summarization, based on hard prompt tuning and soft prefix tuning, respectively. Results and analysis demonstrate that hard prompt models yield the best performance on all metrics and in human evaluations. However, mixed-attribute control remains challenging for summarization tasks. Our dataset and code are available at https://github.com/psunlpgroup/MACSum.
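For intuition, a hard prompt simply verbalizes the target attribute values and prepends them to the source text; below is a hypothetical template (the exact format used by MACSum may differ):

```python
# Hypothetical hard-prompt construction for mixed-attribute control.
source_text = "Speaker A: ...  Speaker B: ..."  # a news article or dialogue
attributes = {
    "Length": "short",
    "Extractiveness": "high",
    "Specificity": "normal",
    "Topic": "product launch",
    "Speaker": "Speaker A",
}
control_prompt = "; ".join(f"{name}: {value}" for name, value in attributes.items())
model_input = control_prompt + " || " + source_text
```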
This study investigates clustered federated learning (FL), one of the formulations of FL with non-i.i.d. data, where the devices are partitioned into clusters and each cluster optimally fits its data with a localized model. We propose a novel clustered FL framework, which applies a nonconvex penalty to pairwise differences of parameters. This framework can automatically identify clusters without a priori knowledge of the number of clusters and the set of devices in each cluster. To implement the proposed framework, we develop a novel clustered FL method called FPFC. Advancing from the standard ADMM, our method is implemented in parallel, updates only a subset of devices at each communication round, and allows each participating device to perform a variable amount of work. This greatly reduces the communication cost while simultaneously preserving privacy, making it practical for FL. We also propose a new warmup strategy for hyperparameter tuning under FL settings and consider the asynchronous variant of FPFC (asyncFPFC). Theoretically, we provide convergence guarantees of FPFC for general nonconvex losses and establish the statistical convergence rate under a linear model with squared loss. Our extensive experiments demonstrate the advantages of FPFC over existing methods.
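Concretely, the pairwise-penalty idea can be written schematically as follows (our rendering, not necessarily FPFC's exact objective; $f_i$ is device $i$'s local loss over $m$ devices, $\lambda > 0$ is a tuning parameter, and $p_\gamma$ is a nonconvex penalty such as MCP or SCAD, the specific choice being our assumption):

$$\min_{\theta_1,\dots,\theta_m}\; \sum_{i=1}^{m} f_i(\theta_i) \;+\; \lambda \sum_{1 \le i < j \le m} p_\gamma\!\bigl(\lVert \theta_i - \theta_j \rVert_2\bigr).$$

Because a nonconvex $p_\gamma$ saturates for large differences, it fuses the parameters of genuinely similar devices while leaving dissimilar ones apart, which is what lets clusters emerge without fixing their number in advance.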
Benefiting from large-scale pretrained vision-language models (VL-PMs), the performance of visual question answering (VQA) has begun to approach human oracle performance. However, finetuning large-scale VL-PMs with limited VQA data usually suffers from overfitting and poor generalization, leading to a lack of robustness. In this paper, we aim to improve the robustness of VQA systems (i.e., their ability to defend against input variations and human-adversarial attacks when VL-PMs are finetuned on VQA) from the perspective of the information bottleneck. In general, the internal representations obtained from VL-PMs inevitably contain information that is irrelevant and redundant for the downstream VQA task, resulting in statistically spurious correlations and insensitivity to input variations. To encourage representations to converge to a sufficient statistic for vision-language learning, we propose the Correlation Information Bottleneck (CIB) principle, which seeks a tradeoff between representation compression and redundancy by minimizing the mutual information (MI) between inputs and internal representations while maximizing the MI between outputs and representations. Meanwhile, CIB measures the internal correlations between visual and linguistic inputs and representations via a symmetric joint MI estimation. Extensive experiments on five VQA input-robustness benchmarks and two VQA benchmarks demonstrate the effectiveness and superiority of the proposed CIB in improving the robustness of VQA systems.
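Schematically, the information-bottleneck tradeoff that CIB builds on can be written as follows (the generic IB form in our notation, not CIB's full symmetric joint-MI objective; $X$ is the multimodal input, $Z$ the internal representation, $Y$ the answer, and $\beta > 0$ the tradeoff coefficient):

$$\min_{p(z \mid x)}\; I(X; Z) \;-\; \beta\, I(Z; Y).$$

Minimizing $I(X; Z)$ compresses away input nuisances, while maximizing $I(Z; Y)$ retains the answer-relevant information.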
To automatically correct handwritten assignments, the conventional approach is to use an OCR model to recognize characters and compare them with the reference answers. However, OCR models are easily confused by similar handwritten Chinese characters, and the textual information in the answers is not used during model inference. Teachers, in contrast, always review and correct assignments with the answers in mind. In this paper, we focus on Chinese cloze-test correction and propose a multimodal approach (named AIM). The encoded representations of the answer text interact with the visual information of the student's handwriting. Instead of predicting "right" or "wrong", we perform sequence labeling on the answer text to infer which answer characters differ from the handwritten content in a fine-grained way. We take samples from OCR datasets as positive samples for this task and develop a negative-sample augmentation method to expand the training data. Experimental results show that AIM outperforms OCR-based methods by a large margin, and extensive studies demonstrate the effectiveness of our multimodal approach.
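To make the task format concrete, here is a hypothetical example of the fine-grained sequence-labeling output (characters and tag names invented for illustration):

```python
# The model tags each character of the reference answer, indicating whether
# the student's handwriting matches it (hypothetical example and tag set).
answer_chars   = ["床", "前", "明", "月", "光"]
predicted_tags = ["SAME", "SAME", "DIFF", "SAME", "SAME"]  # third char miswritten
wrong_positions = [i for i, tag in enumerate(predicted_tags) if tag == "DIFF"]
print(wrong_positions)  # [2]
```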
Demand estimation plays an important role in dynamic pricing, where the optimal price can be obtained by maximizing revenue based on the demand curve. In online hotel-booking platforms, the demand (or occupancy) of rooms varies with room type and changes over time, so obtaining accurate occupancy estimates is challenging. In this paper, we propose a novel hotel demand function that explicitly models the price elasticity of demand for occupancy prediction, and design a price-elasticity prediction model that learns dynamic price-elasticity coefficients from various influencing factors. Our model consists of carefully designed elasticity-learning modules to alleviate the endogeneity problem, and is trained in a multi-task framework to address data sparsity. We conduct comprehensive experiments on real-world datasets and validate that our method outperforms state-of-the-art baselines for both occupancy prediction and dynamic pricing.
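For reference, the central quantity is the price elasticity of demand (schematic notation of ours, with $d(p, t)$ the occupancy of a room type at price $p$ and time $t$):

$$\varepsilon(p, t) \;=\; -\,\frac{\partial \ln d(p, t)}{\partial \ln p}, \qquad p^{\star}(t) \;=\; \arg\max_{p}\; p \cdot d(p, t).$$

The proposed model predicts a dynamic, feature-dependent $\varepsilon(p, t)$; the revenue-maximizing price $p^{\star}(t)$ then follows from the fitted demand curve.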
Computer-assisted minimally invasive surgery has great potential to benefit the modern operating theater. Video data streamed from the endoscope provide rich information to support context awareness for next-generation intelligent surgical systems. To achieve accurate perception and automatic manipulation during surgery, learning-based techniques are a promising approach, having enabled advanced image analysis and scene understanding in recent years. However, learning such models relies heavily on large-scale, high-quality, multi-task-labeled data. This is currently a bottleneck for the topic, as available public datasets in the CAI field remain very limited. In this paper, we present and release the first integrated dataset with multiple image-based perception tasks (named AutoLaparo) to facilitate learning-based automation in hysterectomy surgery. Our AutoLaparo dataset is developed from full-length videos of entire hysterectomy procedures. Specifically, three different yet highly correlated tasks are formulated in the dataset, including surgical workflow recognition, laparoscope motion prediction, and instrument and key anatomy segmentation. In addition, we provide experimental results of state-of-the-art models as reference benchmarks for further model development and evaluation on this dataset. The dataset is available at https://autolaparo.github.io.
Accurate automated analysis of electroencephalograms (EEG) would largely help clinicians effectively monitor and diagnose patients with various brain diseases. Compared with supervised learning on labeled disease EEG data, which can train models to analyze specific diseases but fails on previously unseen conditions, anomaly detection based only on normal EEG can detect any potential anomaly in new EEG recordings. Different from existing anomaly-detection strategies, which do not consider any properties of the unavailable abnormal data during model development, a task-oriented self-supervised learning approach is proposed here; it makes use of available normal EEG together with expert knowledge about abnormal EEG to train a more effective feature extractor for the subsequently developed anomaly detector. In addition, a specific two-branch convolutional neural network with larger kernels is designed as the feature extractor, so that it can more easily capture both the larger-scale and smaller-scale features that often appear in the unavailable abnormal EEG. As demonstrated on three EEG datasets, the effectively designed and trained feature extractor extracts better feature representations from normal data, enabling anomaly detectors that better detect abnormalities in future new EEG recordings. The code is available at https://github.com/irining/eeg-ad.
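A minimal sketch of such a two-branch extractor is given below (kernel sizes, channel counts, and pooling are illustrative assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class TwoBranchExtractor(nn.Module):
    """Illustrative two-branch 1-D CNN over multi-channel EEG: one branch
    uses a large kernel for larger-scale features, the other a small
    kernel for smaller-scale features (all sizes hypothetical)."""
    def __init__(self, in_channels: int = 19, feat_dim: int = 64):
        super().__init__()
        def branch(kernel_size: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv1d(in_channels, feat_dim, kernel_size,
                          padding=kernel_size // 2),
                nn.BatchNorm1d(feat_dim),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # global average over time
            )
        self.large = branch(63)   # coarse, large-scale branch
        self.small = branch(7)    # fine, small-scale branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, 2 * feat_dim)
        return torch.cat([self.large(x).squeeze(-1),
                          self.small(x).squeeze(-1)], dim=1)

x = torch.randn(8, 19, 2560)          # e.g., 10 s of 19-channel EEG at 256 Hz
features = TwoBranchExtractor()(x)    # torch.Size([8, 128])
```

Concatenating the two globally pooled branches gives a single feature vector per recording, which a downstream anomaly detector can then score.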
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.